27 research outputs found

    Compressed Sensing based Low-Power Multi-View Video Coding and Transmission in Wireless Multi-Path Multi-Hop Networks

    Wireless Multimedia Sensor Networks (WMSNs) are increasingly being deployed for surveillance, monitoring, and Internet-of-Things (IoT) sensing applications, in which a set of cameras capture and compress local images and then transmit the data to a remote controller. Such captured local images may also be compressed in a multi-view fashion to reduce the redundancy among overlapping views. In this paper, we present a novel paradigm for compressed-sensing-enabled multi-view coding and streaming in WMSNs. We first propose a new encoding and decoding architecture for multi-view video systems based on Compressed Sensing (CS) principles, composed of cooperative sparsity-aware block-level rate-adaptive encoders, feedback channels, and independent decoders. The proposed architecture leverages the properties of CS to overcome many limitations of traditional encoding techniques, specifically massive storage requirements and high computational complexity. Then, we present a modeling framework that exploits the aforementioned coding architecture. The proposed mathematical problem minimizes power consumption by jointly determining the encoding rate and the multi-path rate allocation subject to distortion and energy constraints. Extensive performance evaluation results show that the proposed framework is able to transmit multi-view streams with guaranteed video quality at lower power consumption.
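    The abstract leaves the encoder unspecified; as a rough illustration of what "sparsity-aware block-level rate-adaptive" compressive sampling can mean, the sketch below measures each image block as y = Φx with a block-dependent measurement rate. The function names, the detail heuristic, and the threshold are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def cs_measure_block(block, rate):
    """Compressively sample a flattened image block as y = Phi @ x,
    with Phi a random Gaussian sensing matrix (a standard CS choice)."""
    x = block.reshape(-1).astype(float)
    n = x.size
    m = max(1, int(rate * n))              # number of measurements kept
    phi = rng.standard_normal((m, n)) / np.sqrt(m)
    return phi @ x, phi

def choose_rate(block, lo=0.1, hi=0.5):
    """Crude sparsity-aware rate choice: smooth blocks (highly
    compressible in a transform basis) get fewer measurements than
    detailed ones. The threshold is an arbitrary illustrative value."""
    detail = np.abs(np.diff(block.astype(float), axis=0)).mean()
    return lo if detail < 1.0 else hi

block = rng.integers(0, 256, size=(16, 16))   # stands in for one image block
rate = choose_rate(block)
y, phi = cs_measure_block(block, rate)
```

    The appeal for low-power sensors is that the encoder reduces to one matrix-vector product per block; all heavy reconstruction work is pushed to the decoder.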

    COVER FEATURE Cloud-Assisted Smart Camera Networks for Energy-Efficient 3D Video Streaming

    Despite current obstacles, smart camera networks that offload computationally intensive tasks to remote servers could allow energy-efficient 3D and multiview video encoding and delivery, yet still ensure high-quality video streaming to multiple devices. Emerging 3D, multiview, and stereoscopic video services such as 3D cinema or free-viewpoint video can offer a considerably higher quality of experience than conventional 2D video. Smart camera technologies will soon make possible similarly novel services for mobile users, including 3D video capture and display, multiview wireless surveillance, and even glimpses of a 3D ocean through an underwater acoustic network. The tradeoff for innovations like these is computational complexity.

    Joint Decoding of Independently Encoded Compressive Multi-view Video Streams

    Abstract—We design a video coding and decoding framework for multi-view video systems based on compressed sensing imaging principles. Specifically, we focus on joint decoding of independently encoded, compressively sampled multi-view video streams. We first propose a novel distributed coding/decoding architecture designed to leverage inter-view correlation through joint decoding of the received compressively sampled frames. At the encoder side, we select one view (referred to as the K-view) as a reference for the other views (referred to as CS-views). The video frames of a CS-view are encoded and transmitted at a lower measurement rate than those of the selected K-view. At the decoder side, we generate side information to decode the CS-views as follows. First, each K-view frame is downsampled and reconstructed, and then compared with the initially reconstructed CS-view frame to obtain an estimate of the inter-view motion vector. The original CS-view measurements are then fused with the generated side image to reconstruct the CS-view frame through a newly designed algorithm that operates in the measurement domain. We also propose a blind video quality estimation method that can be used within the proposed framework to design channel-adaptive rate control algorithms for quality-assured multi-view video streaming. We extensively evaluate the proposed scheme using real multi-view video traces. Results indicate that the proposed scheme achieves up to 1.6 dB of PSNR improvement compared with traditional independent decoding of CS frames.
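    The measurement-domain fusion step can be illustrated in miniature: given a side-information (SI) estimate of a frame, correct it so that it becomes consistent with the measurements actually received. The sketch below uses a generic least-norm projection onto the measurement-consistent set; it shows the idea only, not the paper's actual fusion algorithm, and all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 64, 32                          # signal length, measurement count
phi = rng.standard_normal((m, n)) / np.sqrt(m)

x_true = rng.standard_normal(n)        # stands in for a CS-view block
y = phi @ x_true                       # measurements received from the sensor

# Side information: an imperfect motion-compensated estimate of x_true
x_si = x_true + 0.1 * rng.standard_normal(n)

# Measurement-domain fusion: apply the least-norm update that makes the
# SI consistent with the received measurements (projection onto the
# affine set {x : phi @ x = y}, which contains the true signal).
residual = y - phi @ x_si
x_fused = x_si + np.linalg.pinv(phi) @ residual

err_si = np.linalg.norm(x_true - x_si)
err_fused = np.linalg.norm(x_true - x_fused)
```

    Because the true signal lies in the measurement-consistent set, this projection can never move the estimate farther from it, which is why fusing SI with received measurements helps.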

    Inter-view Motion Compensated Joint Decoding for Compressively Sampled Multiview Video Streams

    In this paper, we design a novel multiview video encoding/decoding architecture for wireless multiview video streaming applications (e.g., 360-degree video and Internet of Things (IoT) multimedia sensing, among others) based on distributed video coding and compressed sensing principles. Specifically, we focus on joint decoding of independently encoded, compressively sampled multiview video streams. We first propose a novel side-information (SI) generation method based on a new inter-view motion compensation algorithm for multiview video joint reconstruction at the decoder end. Then, we propose a technique to fuse the received measurements with resampled measurements from the generated SI to perform the final recovery. Based on the proposed joint reconstruction method, we also derive a blind video quality estimation technique that can be used to adapt the video encoding rate at the sensors online, so as to guarantee desired quality levels in multiview video streaming. Extensive simulation results on real multiview video traces show the effectiveness of the proposed fusion reconstruction method with the assistance of SI generated by inter-view motion compensation. Moreover, they also show that the blind quality estimation algorithm can accurately estimate the reconstruction quality.
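    The inter-view motion compensation behind the SI generation is, at its core, block matching between two camera views. A minimal exhaustive-search sketch follows; the block size, search range, and SAD criterion are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def block_match(ref, target, bx, by, bs=8, search=4):
    """Find the displacement (dy, dx) into `ref` that best matches the
    bs-by-bs block of `target` at (by, bx), by exhaustive SAD search."""
    block = target[by:by + bs, bx:bx + bs].astype(float)
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y0, x0 = by + dy, bx + dx
            if y0 < 0 or x0 < 0 or y0 + bs > ref.shape[0] or x0 + bs > ref.shape[1]:
                continue                      # candidate falls outside the frame
            cand = ref[y0:y0 + bs, x0:x0 + bs].astype(float)
            sad = np.abs(cand - block).sum()  # sum of absolute differences
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv

rng = np.random.default_rng(2)
ref = rng.integers(0, 256, size=(32, 32))
# A second "view" that sees the same scene shifted by (1, 2) pixels
target = np.roll(ref, shift=(1, 2), axis=(0, 1))
mv = block_match(ref, target, bx=8, by=8)
```

    Applying the recovered vectors block by block warps the reference view into an SI estimate of the other view, which the decoder then fuses with the received measurements.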
